
Ethical Robot


Machine ethics: The robot's dilemma

#artificialintelligence

The fully programmable Nao robot has been used to experiment with machine ethics. In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics -- engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance.
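The Three Laws form a strict priority ordering: each law applies only insofar as it does not conflict with the laws above it. A minimal sketch of that lexicographic structure, with invented action flags purely for illustration (this is not code from any of the articles above):

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority
# ordering over candidate actions. The flag names are invented.

def rank(action):
    """Lexicographic key: the First Law dominates the Second,
    which dominates the Third."""
    return (
        not (action["harms_human"] or action["allows_human_harm"]),  # Law 1
        not action["disobeys_order"],                                # Law 2
        action["preserves_self"],                                    # Law 3
    )

def choose(actions):
    """Pick the candidate that best satisfies the Laws in order."""
    return max(actions, key=rank)

candidates = [
    {"name": "preserve self by disobeying", "harms_human": False,
     "allows_human_harm": False, "disobeys_order": True, "preserves_self": True},
    {"name": "obey order, risk self", "harms_human": False,
     "allows_human_harm": False, "disobeys_order": False, "preserves_self": False},
]

# The Second Law outranks the Third: the robot obeys even at its own risk.
print(choose(candidates)["name"])
```

Because Python compares tuples element by element, the ordering of the key exactly mirrors the ordering of the laws: a later element is only consulted when the earlier ones tie.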


Why ethical robots might not be such a good idea after all

Robohub

Last week my colleague Dieter Vanderelst presented our paper, "The Dark Side of Ethical Robots", at AIES 2018 in New Orleans. I blogged about Dieter's very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot's logic it is transformed into a distinctly unethical robot – behaving either competitively or aggressively toward the proxy human. Here are our paper's key conclusions: The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated.
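The claim that ethical and unethical behaviour can share the same cognitive machinery, differing only in how a single value is computed, can be illustrated with a toy model. This is not the authors' implementation; the action names, outcome numbers, and the weighted-sum scoring are all invented for the sketch. Flipping the sign of one weight turns a helper into a competitor:

```python
# Toy sketch (not the paper's code): one action-selection routine
# where flipping the sign of a single weight inverts the ethics.

def choose_action(actions, human_weight):
    """Pick the action with the highest score.

    Each action is (name, outcome_for_human, outcome_for_robot),
    where outcomes are predicted benefits. The score combines both;
    human_weight decides how the human's outcome counts.
    """
    def score(action):
        _, for_human, for_robot = action
        return human_weight * for_human + for_robot
    return max(actions, key=score)[0]

actions = [
    ("warn human of hazard", 10, -1),   # costs the robot a little
    ("ignore human",          0,  0),
    ("race human to goal",   -5,  5),   # robot gains at human's expense
]

ethical = choose_action(actions, human_weight=+1.0)    # values the human's outcome
unethical = choose_action(actions, human_weight=-1.0)  # same machinery, one sign flipped

print(ethical, "|", unethical)
```

Everything else – the perception, the prediction of outcomes, the maximisation – is identical in both cases, which is precisely why the paper argues the transformation is so easy.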


Why Ethical Robots Might Not Be Such a Good Idea After All

IEEE Spectrum Robotics

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. This week my colleague Dieter Vanderelst presented our paper: "The Dark Side of Ethical Robots" at AIES 2018 in New Orleans. I blogged about Dieter's very elegant experiment here, but let me summarize. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot's logic it is transformed into a distinctly unethical robot--behaving either competitively or aggressively toward the proxy human.




Experts warn of the dangers of using autonomous weapons in war

AITopics Original Links

Allowing autonomous weapons to call the shots in combat zones eases the burden for human soldiers. But it also poses a threat to our safety and security, experts have warned. At a recent meeting, researchers said they were concerned these war machines could engage in unethical behavior and become a playground for hackers. Even though we are years away from deploying them on the battlefield, experts doubt that anyone will build ethical robots, and they expect hijacking to increase as systems become more automated. While the laws of war do not inherently prohibit autonomous weapons, today it would be very challenging for autonomous weapons to comply with the laws of war except under narrow circumstances.


Machine Ethics

AITopics Original Links

This technical report is also available in book and CD format. Please Note: Abstracts are linked to individual titles, and will appear in a separate browser window. Full-text versions of the papers are linked to the abstract text. Access to full text may be restricted to AAAI members. PDF file sizes may be large!


Beyond Asimov: how to plan for ethical robots

#artificialintelligence

As robots become integrated into society more widely, we need to be sure they'll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov's Three Laws of Robotics. Today, more than 70 years after Asimov's first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov's Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?


Rise of the machines?

FOX News

But it could be a real threat, warn researchers at the recent World Economic Forum. Unlike today's drones, which are still controlled by human operators, autonomous weapons could potentially be programmed to select and engage targets on their own. "It was one of the concerns that we itemized last year," Toby Walsh, professor of artificial intelligence (AI) at the school of computer science and engineering at the University of New South Wales, told FoxNews.com. "Most of us believe that we don't have the ability to build ethical robots," he added. "What is especially worrying is that the various militaries around the world will be fielding robots in just a few years, and we don't think anyone will be building ethical robots."